image-based sexual abuse
Grok Is Being Used to Mock and Strip Women in Hijabs and Sarees
A substantial number of AI images generated or edited with Grok target women in religious and cultural clothing. Among the vast and growing library of nonconsensual sexualized edits that Grok has generated on request over the past week, many perpetrators have asked xAI's bot to put on or take off a hijab, a saree, a nun's habit, or another kind of modest religious or cultural clothing. In a review of 500 Grok images generated between January 6 and January 9, WIRED found that around 5 percent of the output featured an image of a woman who, at users' prompting, was either stripped of or made to wear religious or cultural clothing. Indian sarees and modest Islamic wear were the most common examples in the output, which also featured Japanese school uniforms, burqas, and early-20th-century-style bathing suits with long sleeves. "Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity," says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia researching the regulation of deepfake abuse.
- Oceania > Australia > Western Australia (0.24)
- South America > Venezuela (0.04)
- North America > United States > California (0.04)
- (2 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults
Ding, Michelle L., Suresh, Harini
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, colloquially known as "deepfake pornography." We identify a "malicious technical ecosystem," or "MTE," comprising open-source face-swapping models and nearly 200 "nudifying" software programs that allow non-technical users to create AIG-NCII within minutes. Then, using the National Institute of Standards and Technology (NIST) AI 100-4 report as a reflection of current synthetic content governance methods, we show how the current landscape of practices fails to effectively regulate the MTE for adult AIG-NCII, and identify the flawed assumptions that explain these gaps.
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture > Yokohama (0.05)
- North America > United States > Rhode Island > Providence County > Providence (0.05)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.05)
- (4 more...)
White House gets voluntary commitments from AI companies to curb deepfake porn
The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM). Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI outlined the steps they will take, and all of them except Common Crawl also agreed to be "incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse." It's a voluntary commitment, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good-faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta. Separately from this federal effort, many big tech and AI companies have been making strides to make it easier for victims of NCII to stop the spread of deepfake images and videos.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
We're Completely Unprepared for the Deepfake Porn Boom
Last week, A.I.-generated nude images of pop superstar Taylor Swift were produced and distributed without her consent. They circulated throughout the internet, with a single post on X (née Twitter) garnering 45 million views before the site took it down. Deepfakes, as they've come to be called in recent years, often target female celebrities, but with the rise of A.I., it's easier than ever for everyday people (almost always women) to be targeted. Last year, more than 143,000 deepfake porn videos were created, according to one estimate from the independent researcher Genevieve Oh, more than in all previous years combined. That number will, in all likelihood, only continue to rise.
- Leisure & Entertainment (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (0.94)
- Media > Music (0.92)
Attitudes Towards and Knowledge of Non-Consensual Synthetic Intimate Imagery in 10 Countries
Umbach, Rebecca, Henry, Nicola, Beard, Gemma, Berryessa, Colleen
Deepfake technology tools have become ubiquitous, "democratizing" the ability to manipulate images and videos. One popular use of such technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet. This article examines attitudes and behaviors related to non-consensual synthetic intimate imagery (NSII) across over 16,000 respondents in 10 countries. Despite only nascent societal awareness of NSII, respondents considered NSII behaviors harmful. With regard to prevalence, 2.2% of all respondents indicated personal victimization, and 1.8% indicated perpetration behaviors. Respondents from countries with relevant legislation also reported perpetration and victimization experiences, suggesting that legislative action alone is not a sufficient solution to deter perpetration. Technical measures to reduce harms may include suggestions for how individuals can better monitor their presence online, as well as enforced platform policies that ban, or allow for removal of, NSII content.
- Oceania > Australia (0.15)
- Asia > South Korea (0.15)
- North America > United States > California (0.14)
- (19 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Law > Statutes (0.88)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.46)
Europe Has Traded Away Its Online Porn Law
When someone Inês Marinho trusted shared an intimate video of her online without her consent in 2019, she compared it to a chronic disease she would have to live with for the rest of her life. First the video was shared through WhatsApp, then Telegram and Twitter. Eventually it found its way onto popular porn platforms, including Pornhub and XVideos. "It didn't show my face, but it had my name on it," says Marinho, who is based in Lisbon, Portugal. After she wrestled with each platform to get the video taken down, Marinho founded an organization called #NãoPartilhes (#DoNotShare) that helps other people who have faced this type of abuse and runs education sessions in schools.
AI can now create fake porn, making revenge porn even more complicated
In January this year, a new app was released that gives users the ability to swap out faces in a video with a different face obtained from another photo or video – similar to Snapchat's "face swap" feature. It's an everyday version of the kind of high-tech computer-generated imagery (CGI) we see in the movies. You might recognise it from the cameo of a young Princess Leia in the 2016 Star Wars film Rogue One, which used the body of another actor and footage from the first Star Wars film created 39 years earlier. Now, anyone with a high-powered computer, a graphics processing unit (GPU) and time on their hands can create realistic fake videos – known as "deepfakes" – using artificial intelligence (AI). The problem is that these same tools are accessible to those who seek to create non-consensual pornography of friends, work colleagues, classmates, ex-partners and complete strangers – and post it online. In December 2017, Motherboard broke the story of a Reddit user known as "deepfakes", who used AI to swap the faces of actors in pornographic videos with the faces of well-known celebrities.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)